We introduce AlphaD3M, an automatic machine learning (AutoML) system based on meta reinforcement learning using sequence models with self play. AlphaD3M is based on edit operations performed over machine learning pipeline primitives, providing explainability. We compare AlphaD3M with state-of-the-art AutoML systems: Autosklearn, Autostacker, and TPOT, on OpenML datasets. AlphaD3M achieves competitive performance while being an order of magnitude faster, reducing computation time from hours to minutes, and is explainable by design.
Automated segmentation of the optic disc (OD) and optic cup (OC) in fundus images enables efficient measurement of the vertical cup-to-disc ratio (VCDR), a biomarker commonly used in ophthalmology to assess the degree of glaucomatous optic neuropathy. This task is typically addressed with coarse-to-fine deep learning pipelines, in which a first stage approximates the OD location and a second stage uses a crop of that region to predict the OD/OC masks. Although this approach is widely applied in the literature, no studies have analyzed its actual contribution to the results. In this paper, we present a comprehensive analysis of different coarse-to-fine designs using five public databases, both from a standard segmentation perspective and in terms of estimating the VCDR for glaucoma assessment. Our analysis shows that these algorithms do not necessarily outperform standard multi-class single-stage models, especially when the latter are learned from sufficiently large and diverse training sets. Furthermore, we observe that the coarse stage achieves better OD segmentation results than the fine stage, and that providing OD supervision in the second stage is essential to ensure accurate OC masks. Moreover, both single-stage and two-stage models trained in a multi-dataset setting show results on par with, or even better than, other state-of-the-art alternatives, while ranking first for OD/OC segmentation. Finally, we compare the models' VCDR predictions against those of six ophthalmologists on a subset of AIROGS images, to understand them in the context of inter-observer variability. We note that the VCDR estimates recovered from single-stage and coarse-to-fine models can achieve good glaucoma detection results even when they are not highly correlated with the experts' manual measurements.
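As a rough illustration of the biomarker discussed above, the following is a minimal sketch of how a vertical cup-to-disc ratio could be computed from binary OD/OC masks. The function names and the row-list mask encoding are assumptions for illustration, not the paper's implementation:

```python
def vertical_extent(mask):
    """Vertical height of a binary mask (list of rows): the number of
    rows spanned between the first and last row with a foreground pixel."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    return (max(rows) - min(rows) + 1) if rows else 0

def vcdr(od_mask, oc_mask):
    """Vertical cup-to-disc ratio: cup height divided by disc height."""
    disc = vertical_extent(od_mask)
    return vertical_extent(oc_mask) / disc if disc else 0.0

# Toy 6x6 masks: the disc spans rows 1-4, the cup spans rows 2-3.
od = [[0]*6, [0,1,1,1,1,0], [0,1,1,1,1,0], [0,1,1,1,1,0], [0,1,1,1,1,0], [0]*6]
oc = [[0]*6, [0]*6, [0,0,1,1,0,0], [0,0,1,1,0,0], [0]*6, [0]*6]
print(vcdr(od, oc))  # 2 rows of cup / 4 rows of disc = 0.5
```

In practice the masks would come from the segmentation network's output, and the extents would be measured in pixels of the cropped fundus region.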
The Brazilian Supreme Court receives tens of thousands of cases each semester. Court employees spend thousands of hours performing the initial analysis and classification of those cases, effort that takes time away from the later, more complex stages of the case management workflow. In this paper, we explore multimodal classification of documents from Brazil's Supreme Court. We train and evaluate our methods on a novel multimodal dataset of 6,510 lawsuits (339,478 pages), with manual annotation assigning each page to one of six classes. Each lawsuit is an ordered sequence of pages, which are stored both as images and as the corresponding text extracted through optical character recognition. We first train two unimodal classifiers: a ResNet pre-trained on ImageNet and fine-tuned on the page images, and a convolutional network with filters of multiple kernel sizes trained from scratch on the document texts. We use them as extractors of visual and textual features, which are then combined through our proposed fusion module. Our fusion module can handle missing textual or visual input by using learned embeddings for the missing data. In addition, we experiment with bidirectional Long Short-Term Memory (BiLSTM) networks and linear-chain conditional random fields to model the sequential nature of the pages. The multimodal approaches outperform both the text classifier and the visual classifier, especially when the sequential nature of the pages is leveraged.
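A minimal sketch of the fusion idea described above: concatenating visual and textual feature vectors, and substituting a learned placeholder embedding when a modality is missing. The vector sizes, the placeholder values, and the plain concatenation are illustrative assumptions, not the paper's exact architecture:

```python
# Hypothetical learned embeddings that stand in for a missing modality;
# in a real model these would be trainable parameters.
LEARNED_MISSING_VISUAL = [0.1] * 4
LEARNED_MISSING_TEXT = [0.2] * 3

def fuse(visual_feat=None, text_feat=None):
    """Concatenate visual and textual feature vectors, replacing a
    missing modality with its learned placeholder embedding."""
    v = visual_feat if visual_feat is not None else LEARNED_MISSING_VISUAL
    t = text_feat if text_feat is not None else LEARNED_MISSING_TEXT
    return v + t  # concatenation of the two feature vectors

# A page whose OCR text is missing still yields a full fused vector.
fused = fuse(visual_feat=[0.5, 0.5, 0.5, 0.5], text_feat=None)
print(len(fused))  # 4 visual dims + 3 placeholder text dims = 7
```

The fused vector would then feed the classification head (or the BiLSTM/CRF layers when modeling page sequences).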
Delimiting salt inclusions from migrated images is a time-consuming activity that relies on highly human-curated analysis and is subject to interpretation errors or limitations of the methods available. We propose to use migrated images produced from an inaccurate velocity model (with a reasonable approximation of sediment velocity, but without salt inclusions) to predict the correct salt inclusions shape using a Convolutional Neural Network (CNN). Our approach relies on subsurface Common Image Gathers to focus the sediments' reflections around the zero offset and to spread the energy of salt reflections over large offsets. Using synthetic data, we trained a U-Net to use common-offset subsurface images as input channels for the CNN and the correct salt-masks as network output. The network learned to predict the salt inclusions masks with high accuracy; moreover, it also performed well when applied to synthetic benchmark data sets that were not previously introduced. Our training process tuned the U-Net to successfully learn the shape of complex salt bodies from partially focused subsurface offset images.
Health misinformation in search engines is a significant problem that can negatively affect individuals or public health. To mitigate the problem, TREC organizes the Health Misinformation track. This paper presents our submission to this track. We use BM25 and a domain-specific semantic search engine to retrieve the initial documents. We then apply a health news schema for quality assessment and use it to re-rank the documents. We merge the scores from the different components using reciprocal rank fusion. Finally, we discuss the results, outline future work, and conclude.
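The reciprocal rank fusion step mentioned above follows a standard formula: a document's fused score is the sum of 1/(k + rank) over the ranked lists that contain it. The constant k = 60 is the commonly used default; the value used in this submission is not stated, so it is an assumption here. A minimal sketch:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids.
    Each document scores sum(1 / (k + rank)) over the lists it appears
    in, with rank starting at 1; a higher fused score ranks higher."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical runs from the two retrieval components:
bm25_run = ["d1", "d2", "d3"]
semantic_run = ["d2", "d3", "d1"]
print(reciprocal_rank_fusion([bm25_run, semantic_run]))  # → ['d2', 'd1', 'd3']
```

Documents ranked consistently high in several runs (here d2) rise to the top even when no single run puts them first.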
The popularization of social media has created problems such as hate speech and sexism. The identification and classification of sexism in social media are highly relevant tasks, as they would allow building a healthier social environment. Nevertheless, these tasks are considerably challenging. This work proposes a system that uses multilingual and monolingual BERT models together with data-point translation and ensemble strategies for sexism identification and classification in English and Spanish. It was conducted in the context of the sEXism Identification in Social neTworks shared task (EXIST 2021), proposed by the Iberian Languages Evaluation Forum (IberLEF). The proposed system and its main components are described, and an in-depth hyperparameter analysis is conducted. The main results observed were: (i) the system obtained better results than the baseline model (multilingual BERT); (ii) ensemble models obtained better results than monolingual models; and (iii) the ensemble model considering all individual models and the best standardized values obtained the best accuracy and F1-score for both tasks. This work achieved first place in both tasks, with the highest accuracies (0.658 and 0.780) and the highest F1-scores (F1-binary for Task 1 and F1-macro for Task 2).
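The ensemble strategy in point (iii) above combines the outputs of the individual models. The abstract does not give the exact combination rule, so the soft-voting sketch below (averaging per-class probabilities, with hypothetical class names and scores) is only one plausible illustration:

```python
def ensemble_predict(per_model_probs):
    """Average the class-probability vectors produced by several models
    and return the index of the highest-scoring class (soft voting)."""
    n_models = len(per_model_probs)
    n_classes = len(per_model_probs[0])
    avg = [sum(p[c] for p in per_model_probs) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

# Three hypothetical models scoring the classes (not-sexist, sexist):
probs = [[0.4, 0.6], [0.7, 0.3], [0.2, 0.8]]
print(ensemble_predict(probs))  # averaged probs favor class 1
```

Averaging tempers each individual model's errors, which matches the observation that the ensembles outperformed the monolingual models.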
This paper describes our participation in the DEtection of TOXicity in comments In Spanish (DETOXIS) shared task 2021 at the 3rd Workshop of the Iberian Languages Evaluation Forum. The shared task is divided into two related classification tasks: (i) Task 1: toxicity detection; and (ii) Task 2: toxicity level detection. They focus on the xenophobia problem exacerbated by the spread of toxic comments posted on different online news articles related to immigration. One of the necessary efforts towards mitigating this problem is to detect toxicity in the comments. Our main objective was to achieve the best results in the competition's official metrics: the F1-score for Task 1 and the Closeness Evaluation Metric (CEM) for Task 2. To tackle the tasks, we used two types of machine learning models: (i) statistical models and (ii) Deep Bidirectional Transformers for Language Understanding (BERT) models. We obtained our best results in both tasks using BETO, a BERT model trained on a large Spanish corpus. We reached third place in the Task 1 official ranking with an F1-score of 0.5996, and sixth place in the Task 2 official ranking with a CEM of 0.7142. Our results suggest that: (i) BERT models obtain better results than statistical models for toxicity detection in text comments; and (ii) monolingual BERT models have an advantage over multilingual BERT models for toxicity detection in comments written in their pre-training language.
Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns. To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
Due to the environmental impacts caused by the construction industry, repurposing existing buildings and making them more energy-efficient has become a high-priority issue. However, a legitimate concern of land developers is associated with the buildings' state of conservation. For that reason, infrared thermography has been used as a powerful tool to characterize these buildings' state of conservation by detecting pathologies such as cracks and humidity. Thermal cameras detect the radiation emitted by any material and translate it into temperature-color-coded images. Abnormal temperature changes may indicate the presence of pathologies; however, reading thermal images is not always straightforward. This research project aims to combine infrared thermography and machine learning (ML) to help stakeholders determine the viability of reusing existing buildings by identifying their pathologies and defects more efficiently and accurately. In this phase of the project, we used a Deep Convolutional Neural Network (DCNN) image classification model to differentiate three levels of cracks in one particular building. The model's accuracy was compared between the MSX and thermal images acquired from two distinct thermal cameras, as well as fused images (formed through multisource information), to test the influence of the input data and network on the detection results.
The advances in Artificial Intelligence are creating new opportunities to improve the lives of people around the world, from business to healthcare, from lifestyle to education. For example, some systems profile users using their demographic and behavioral characteristics to make certain domain-specific predictions. Often, such predictions impact the user's life directly or indirectly (e.g., loan disbursement, determining insurance coverage, shortlisting applications, etc.). As a result, concerns over such AI-enabled systems are also increasing. To address these concerns, such systems are mandated to be responsible, i.e., transparent, fair, and explainable to developers and end-users. In this paper, we present ComplAI, a unique framework to enable, observe, analyze, and quantify explainability, robustness, performance, fairness, and model behavior in drift scenarios, and to provide a single Trust Factor that evaluates different supervised Machine Learning models not just on their ability to make correct predictions but from an overall responsibility perspective. The framework helps users to (a) connect their models and enable explanations, (b) assess and visualize different aspects of the model, such as robustness, drift susceptibility, and fairness, and (c) compare different models (from different model families or obtained through different hyperparameter settings) from an overall perspective, thereby facilitating actionable recourse for model improvement. It is model-agnostic, works with different supervised machine learning scenarios (i.e., binary classification, multi-class classification, and regression) and frameworks, and can be seamlessly integrated with any ML life-cycle framework. Thus, this already-deployed framework aims to unify critical aspects of Responsible AI systems for regulating the development process of such real systems.
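The single Trust Factor described above aggregates the framework's per-dimension assessments into one number. The abstract does not give the aggregation formula, so the weighted average below, including all dimension names and weights, is purely a hypothetical illustration of combining normalized dimension scores:

```python
def trust_factor(scores, weights=None):
    """Combine per-dimension scores in [0, 1] (e.g. explainability,
    robustness, performance, fairness, drift resilience) into a single
    weighted-average trust value in [0, 1]."""
    names = sorted(scores)
    weights = weights or {n: 1.0 for n in names}
    total = sum(weights[n] for n in names)
    return sum(scores[n] * weights[n] for n in names) / total

model_scores = {  # hypothetical assessments of one model
    "explainability": 0.8,
    "robustness": 0.6,
    "performance": 0.9,
    "fairness": 0.7,
    "drift_resilience": 0.5,
}
print(round(trust_factor(model_scores), 2))  # equal weights → 0.7
```

A scalar of this kind makes models from different families or hyperparameter settings directly comparable, which is the comparison use case (c) the abstract describes.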